Infrared Camera


DIDLM:A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping

Gong, WeiSheng, He, Chen, Su, KaiJie, Li, QingYong

arXiv.org Artificial Intelligence

This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments. The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar, facilitating exploration of advanced perception and mapping techniques. Integration of diverse sensor data enhances perceptual capabilities in extreme conditions such as rain, snow, and uneven road surfaces. The dataset also includes interactive robot data at different speeds indoors and outdoors, providing a realistic background environment. SLAM comparisons between similar routes are conducted, analyzing the influence of different complex scenes on various sensors. Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios. In summary, this dataset addresses the problem of data scarcity in special environments, fostering the development of perception and mapping algorithms for extreme conditions. Leveraging multi-sensor data including infrared, depth cameras, LiDAR, 4D millimeter-wave radar, and robot interactions, the dataset advances intelligent mapping and perception capabilities. Our dataset is available at https://github.com/GongWeiSheng/DIDLM.


ECMD: An Event-Centric Multisensory Driving Dataset for SLAM

Chen, Peiyu, Guan, Weipeng, Huang, Feng, Zhong, Yihan, Wen, Weisong, Hsu, Li-Ta, Lu, Peng

arXiv.org Artificial Intelligence

Leveraging multiple sensors enhances complex environmental perception and increases resilience to varying luminance conditions and high-speed motion patterns, achieving precise localization and mapping. This paper proposes ECMD, an event-centric multisensory dataset containing 81 sequences and covering over 200 km of various challenging driving scenarios including high-speed motion, repetitive scenarios, dynamic objects, etc. ECMD provides data from two sets of stereo event cameras with different resolutions (640*480, 346*260), stereo industrial cameras, an infrared camera, a top-installed mechanical LiDAR with two slanted LiDARs, two consumer-level GNSS receivers, and an onboard IMU. Meanwhile, the ground truth of the vehicle was obtained using a centimeter-level high-accuracy GNSS-RTK/INS navigation system. All sensors are well calibrated and temporally synchronized at the hardware level, recording data simultaneously. We additionally evaluate several state-of-the-art SLAM algorithms for benchmarking visual and LiDAR SLAM and identifying their limitations. The dataset is available at https://arclab-hku.github.io/ecmd/.


Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous Navigation in Restricted Waters

Chung, Dongha, Kim, Jonghwi, Lee, Changyu, Kim, Jinwhan

arXiv.org Artificial Intelligence

This paper presents a multimodal maritime dataset and the data collection procedure used to gather it, which aims to facilitate autonomous navigation in restricted water environments. The dataset comprises measurements obtained using various perception and navigation sensors, including a stereo camera, an infrared camera, an omnidirectional camera, three LiDARs, a marine radar, a global positioning system, and an attitude heading reference system. The data were collected along a 7.5-km-long route that includes a narrow canal, inner and outer ports, and near-coastal areas in Pohang, South Korea. The collection was conducted under diverse weather and visual conditions. The dataset and its detailed description are available for free download at https://sites.google.com/view/pohang-canal-dataset.


COROID: A Crowdsourcing-based Companion Drones to Tackle Current and Future Pandemics

Rauniyar, Ashish, Hagos, Desta Haileselassie, Jha, Debesh, Håkegård, Jan Erik

arXiv.org Artificial Intelligence

Due to the COVID-19 virus, which the World Health Organization (WHO) has declared a pandemic, we are witnessing the greatest pandemic of the decade. Millions of people are being infected, resulting in thousands of deaths every day across the globe. Even the countries with the best healthcare systems could not handle the pandemic because of the strain of treating thousands of patients at a time. The count of infections and deaths is increasing at an alarming rate because of the spread of the virus. We believe that innovative technologies could help reduce pandemics to a certain extent until the medical field finds a definite solution for handling and treating such pandemic situations. Technological innovation has the potential to introduce new tools that support people and society during these difficult times. Therefore, this paper proposes the idea of using drones as companions to tackle current and future pandemics. Our COROID drone is based on the principle of crowdsourcing sensor data from the public's smart devices, which can be correlated with the readings of the infrared cameras equipped on the COROID drones. To the best of our knowledge, this concept has yet to be investigated either as a concept or as a product. Therefore, we believe that the COROID drone is innovative and has huge potential to tackle COVID-19 and future pandemics.


A Night to Behold: Researchers Use Deep Learning to Bring Color to Night Vision

#artificialintelligence

A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. In a paper published this week in the journal PLOS One, a team of researchers at the University of California, Irvine, led by Professor Pierre Baldi and Dr. Andrew Browne, describes how they reconstructed color images of faces from photos taken with an infrared camera. The study is a step toward predicting and reconstructing what humans would see using cameras that collect light under imperceptible near-infrared illumination. The study's authors explain that humans see light in the so-called "visible spectrum," or light with wavelengths between 400 and 700 nanometers. Typical night-vision systems rely on cameras that collect infrared light outside this spectrum that we can't see.


NYK Tests AI System to Automatically Identify Navigation Hazards

#artificialintelligence

Efforts are continuing to explore the use of automation, artificial intelligence, and image recognition to improve the navigation and safety of ship operations. Earlier this year, Japan's Mitsui O.S.K. Lines demonstrated its use of augmented reality (AR) technology to enhance navigational awareness, and now NYK has announced a trial of a system that can monitor the horizon to recognize dangerous objects within a ship's range. NYK, working with its strategic research and development subsidiary MTI Co., installed the Automatic Ship Target Recognition System, developed in Israel by Orca AI, on one of NYK's vessels. The goal is to verify the detection capability and the contribution the system can make to the role of the lookout on a ship's bridge. Working with Orca, NYK also plans to improve the target detection algorithm through data collection and machine learning on the Israeli company's servers.


Cockroaches could be steered remotely for search and rescue missions

Daily Mail - Science & tech

Scientists have demonstrated how a live cockroach equipped with a computerised 'backpack' could be steered remotely for search and rescue missions. The backpack, created by a team at Nanyang Technological University in Singapore, is a small computer chip fitted with an infrared camera, carbon dioxide sensor and a temperature/humidity sensor, among other functions. In lab trials, the team fitted the backpack to a Madagascar hissing cockroach and successfully used it to find humans in a simulated disaster scene. The cockroach fitted with the backpack also had electrodes implanted in its cerci – the protruding appendages on its left and right side. Electrical currents were delivered to the two cerci via the electrodes to induce turning, allowing the scientists to control the direction it moved in.


Robots equipped with infrared cameras could patrol holiday destinations under new EU plans

Daily Mail - Science & tech

Robots and drones equipped with infrared cameras could patrol holiday destinations and enforce social distancing rules under new EU plans to save the summer break. European Commission tourism proposals imagine 'artificial intelligence and robotics [to] underpin public health measures', alongside infection-tracing mobile apps. Automatons could appear in places like airports, beaches, resorts and restaurants to make sure that people keep at least 5 feet (1.5 metres) away from each other. On-board infrared cameras could allow the robots to measure people's temperatures from a distance and identify people with a fever who need to self-isolate. The plans come after Singapore employed a Boston Dynamics Spot robot to roam parks, broadcasting a message reminding pedestrians to keep their distance.


Intel RealSense 3D Camera for robotics & SLAM (with code)

Robohub

The Intel RealSense cameras have been gaining popularity over the past few years for use as 3D cameras and for visual odometry. I had the chance to hear a presentation from Daniel Piro about using the Intel RealSense cameras generally and for SLAM (Simultaneous Localization and Mapping). The following post is based on his talk. Depth information is important since it gives us what is needed to understand shapes, sizes, and distance. This lets us (or a robot) know how far away objects are, so it can avoid running into things and plan paths around obstacles in the camera's field of view.
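To make the idea concrete, here is a minimal sketch (not the RealSense SDK itself) of how a per-pixel depth image could gate simple obstacle avoidance. The function name, region-of-interest size, and stopping threshold are illustrative assumptions; a RealSense pipeline would supply the depth array in metres, with 0 marking pixels where no depth reading was available.

```python
import numpy as np

def nearest_obstacle_m(depth_m: np.ndarray, roi_frac: float = 0.3) -> float:
    """Return the nearest valid depth (in metres) within a central region of interest.

    depth_m: HxW array of per-pixel distances in metres (0 = no valid reading).
    roi_frac: fraction of image width/height to inspect around the centre.
    """
    h, w = depth_m.shape
    dh, dw = int(h * roi_frac / 2), int(w * roi_frac / 2)
    roi = depth_m[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    valid = roi[roi > 0]  # drop pixels with no depth reading
    return float(valid.min()) if valid.size else float("inf")

# Hypothetical frame: an obstacle 0.8 m ahead in an otherwise 3 m deep scene.
frame = np.full((480, 640), 3.0)
frame[200:280, 280:360] = 0.8
if nearest_obstacle_m(frame) < 1.0:
    print("obstacle ahead: stop or replan")
```

A real system would feed this check from live depth frames and hand the result to a path planner rather than simply stopping, but the principle is the same: depth turns a 2D image into distances the robot can act on.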


Using Artificial Intelligence For Smarter Recycling - GE

#artificialintelligence

Filled with intricate mazes of high-speed conveyor belts carrying yesterday's garbage, high-tech recycling centers use sophisticated sensors to sort plastic from paper from aluminum. While this technology may streamline sorting, it's not smart or nimble enough to finish the job. Behind the scenes, recycling workers continue to sort the materials, making sure cereal boxes don't mix with soda cans. But the future of smart recycling is looking brighter. Spider-like robotic arms, guided by cameras and artificial intelligence (AI) -- think of it as facial-recognition technology for garbage -- are helping to make municipal recycling facilities (MRFs) run more efficiently.